

Creators/Authors contains: "Admoni, Henny"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. We designed an observer-aware method for creating navigation paths that simultaneously indicate a robot's goal while attempting to remain in view for a particular observer. Prior work on legible motion does not account for observers' limited fields of view, which can lead to wasted communication efforts that go unobserved by the intended audience. Our observer-aware legibility algorithm directly models the locations and perspectives of observers and places legible movements where they can be easily seen. To explore the effectiveness of this technique, we performed a 300-person online user study. Users viewed first-person videos of restaurant scenes in which robot waiters moved along paths optimized for different observer perspectives, along with a baseline path that did not take any observer's field of view into account. Participants were asked to report how likely they thought it was that the robot was heading to their table versus the other goal table as it moved along each path. We found that for observers with incomplete views of the restaurant, observer-aware legibility is effective at increasing the period of time during which observers correctly infer the robot's goal. Non-targeted observers perform worse on paths created for other observers than on paths created for themselves, which is the natural drawback of personalizing legible motion to a particular observer. We also find that an observer's relationship to the environment (e.g., what is in their field of view) has more influence on their inferences than the observer's position relative to the targeted observer, and we discuss how this implies that knowledge of the environment is required to plan effectively for multiple observers at once.
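    A minimal sketch of the core idea, assuming a planar workspace: a standard noisily-optimal goal-inference term is averaged only over the waypoints that fall inside the observer's field-of-view cone, so legible motion only counts where it can be seen. The function names and parameters (half_angle, beta) are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def in_fov(point, observer_pos, observer_heading, half_angle=np.radians(60)):
        """True if `point` lies within the observer's field-of-view cone."""
        to_point = point - observer_pos
        cos_angle = np.dot(to_point, observer_heading) / (
            np.linalg.norm(to_point) * np.linalg.norm(observer_heading) + 1e-9)
        return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= half_angle

    def goal_probability(start, point, goals, target_idx, beta=1.0):
        """Noisily-optimal inference: P(target goal | partial path ending at `point`)."""
        scores = []
        for g in goals:
            direct = np.linalg.norm(g - start)                    # optimal cost to goal
            via = np.linalg.norm(point - start) + np.linalg.norm(g - point)
            scores.append(np.exp(-beta * (via - direct)))         # suboptimality penalty
        return scores[target_idx] / sum(scores)

    def observer_aware_legibility(path, start, goals, target_idx,
                                  observer_pos, observer_heading):
        """Average goal-inference probability over the waypoints the observer can see."""
        visible = [p for p in path if in_fov(p, observer_pos, observer_heading)]
        if not visible:
            return 0.0  # a path the observer never sees communicates nothing
        return float(np.mean([goal_probability(start, p, goals, target_idx)
                              for p in visible]))
    ```

    A planner could then rank candidate paths by this score, possibly traded off against path length, and execute the best one for the targeted observer.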
  2. The human-robot interaction community has developed many methods for robots to navigate safely and socially alongside humans. However, experimental procedures to evaluate these works are usually constructed on a per-method basis. Such disparate evaluations make it difficult to compare the performance of such methods across the literature. To bridge this gap, we introduce SocNavBench, a simulation framework for evaluating social navigation algorithms. SocNavBench comprises a simulator with photo-realistic capabilities and curated social navigation scenarios grounded in real-world pedestrian data. We also provide an implementation of a suite of metrics to quantify the performance of navigation algorithms on these scenarios. Altogether, SocNavBench provides a test framework for evaluating disparate social navigation methods in a consistent and interpretable manner. To illustrate its use, we demonstrate testing three existing social navigation methods and a baseline method on SocNavBench, showing how the suite of metrics helps infer their performance trade-offs. Our code is open-source, allowing the addition of new scenarios and metrics by the community to help evolve SocNavBench to reflect advancements in our understanding of social navigation.
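    As an illustration of the kind of measures such a metric suite computes, here is a minimal sketch of three common social-navigation metrics over time-aligned (N, 2) trajectories. These are generic examples, not SocNavBench's actual API.

    ```python
    import numpy as np

    def path_length(traj):
        """Total distance traveled along an (N, 2) array of robot positions."""
        return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

    def path_efficiency(traj, goal):
        """Straight-line distance to the goal divided by distance actually traveled."""
        straight = float(np.linalg.norm(goal - traj[0]))
        traveled = path_length(traj)
        return straight / traveled if traveled > 0 else 0.0

    def min_pedestrian_distance(traj, pedestrian_trajs):
        """Closest the robot ever comes to any pedestrian (trajectories time-aligned)."""
        return float(min(np.linalg.norm(traj - ped, axis=1).min()
                         for ped in pedestrian_trajs))
    ```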
  3. Simulators are an essential tool for behavioural and interaction research on driving, due to the safety, cost, and experimental control issues of on-road driving experiments. The most advanced simulators use expensive 360-degree projection systems to ensure visual fidelity, full field of view, and immersion. However, similar visual fidelity can be achieved affordably using a virtual reality (VR) based visual interface. We present DReyeVR, an open-source VR-based driving simulator platform designed with behavioural and interaction research priorities in mind. DReyeVR (read "driver") is based on Unreal Engine and the CARLA autonomous vehicle simulator, and has features such as eye tracking, a functional driving heads-up display (HUD) and vehicle audio, custom definable routes and traffic scenarios, experimental logging, replay capabilities, and compatibility with ROS. We describe the hardware required to deploy this simulator for under 5000 USD, much cheaper than commercially available simulators. Finally, we describe how DReyeVR may be leveraged to answer an interaction research question in an example scenario. DReyeVR is open-source at this URL: https://github.com/HARPLab/DReyeVR
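    Because DReyeVR is built on CARLA, experiments can script the simulator through CARLA's standard Python client. Below is a minimal sketch that connects to a running server and reads the ego vehicle's pose; it uses only stock carla calls, while DReyeVR's own streams (eye tracking, HUD state) come from platform-specific sensors not shown here.

    ```python
    import carla

    # Connect to a running CARLA server (DReyeVR runs on top of CARLA).
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    # Find the ego ("hero") vehicle, if the simulator has spawned one.
    ego = None
    for actor in world.get_actors().filter("vehicle.*"):
        if actor.attributes.get("role_name") == "hero":
            ego = actor
            break

    # Log a simple per-tick record of the vehicle pose.
    if ego is not None:
        stamp = world.get_snapshot().timestamp.elapsed_seconds
        tf = ego.get_transform()
        print(stamp, tf.location.x, tf.location.y, tf.rotation.yaw)
    ```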
  4. Shared control systems can make complex robot teleoperation tasks easier for users. These systems predict the user’s goal, determine the motion required for the robot to reach that goal, and combine that motion with the user’s input. Goal prediction is generally based on the user’s control input (e.g., the joystick signal). In this paper, we show that this prediction method is especially effective when users follow standard noisily optimal behavior models. In tasks with input constraints like modal control, however, this effectiveness no longer holds, so additional sources for goal prediction can improve assistance. We implement a novel shared control system that combines natural eye gaze with joystick input to predict people’s goals online, and we evaluate our system in a real-world, COVID-safe user study. We find that modal control reduces the efficiency of assistance according to our model, and when gaze provides a prediction earlier in the task, the system’s performance improves. However, gaze on its own is unreliable and assistance using only gaze performs poorly. We conclude that control input and natural gaze serve different and complementary roles in goal prediction, and using them together leads to improved assistance. 
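    A minimal sketch of this kind of fusion, under illustrative assumptions: a Bayesian posterior over goals multiplies a joystick likelihood (how well the input direction aligns with each goal) by a gaze likelihood (how close the gaze point falls to each goal), and the robot blends user input with autonomous motion in proportion to its confidence. The likelihood forms and parameters (beta, sigma) are assumptions, not the paper's exact models.

    ```python
    import numpy as np

    def joystick_likelihoods(position, input_dir, goals, beta=5.0):
        """P(input | goal): larger when the joystick points toward a goal."""
        likes = []
        for g in goals:
            to_goal = (g - position) / (np.linalg.norm(g - position) + 1e-9)
            likes.append(np.exp(beta * np.dot(input_dir, to_goal)))
        return np.array(likes)

    def gaze_likelihoods(gaze_point, goals, sigma=0.1):
        """P(gaze | goal): Gaussian falloff with distance from each goal."""
        d = np.array([np.linalg.norm(gaze_point - g) for g in goals])
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    def fuse_and_assist(position, input_dir, gaze_point, goals, prior):
        """Posterior over goals from both channels, then a confidence-weighted blend."""
        posterior = (prior
                     * joystick_likelihoods(position, input_dir, goals)
                     * gaze_likelihoods(gaze_point, goals))
        posterior /= posterior.sum()
        best = goals[int(np.argmax(posterior))]
        autonomy_dir = (best - position) / (np.linalg.norm(best - position) + 1e-9)
        confidence = float(posterior.max())
        # Linear blend: more autonomy as confidence in the predicted goal grows.
        command = confidence * autonomy_dir + (1 - confidence) * input_dir
        return command, posterior
    ```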
  5. As assistive robotics has expanded to many task domains, comparing assistive strategies among the varieties of research becomes increasingly difficult. To begin to unify the disparate domains into a more general theory of assistance, we present a definition of assistance, a survey of existing work, and three key design axes that occur in many domains and benefit from the examination of assistance as a whole. We first define an assistance perspective that focuses on understanding a robot that is in control of its actions but subordinate to a user’s goals. Next, we use this perspective to explore design axes that arise from the problem of assistance more generally and explore how these axes have comparable trade-offs across many domains. We investigate how the assistive robot handles other people in the interaction, how the robot design can operate in a variety of action spaces to enact similar goals, and how assistive robots can vary the timing of their actions relative to the user’s behavior. While these axes are by no means comprehensive, we propose them as useful tools for unifying assistance research across domains and as examples of how taking a broader perspective on assistance enables more cross-domain theorizing about assistance. 
  6. Robots are increasingly being introduced into domains where they assist or collaborate with human counterparts. There is a growing body of literature on how robots might serve as collaborators in creative activities, but little is known about the factors that shape human perceptions of robots as creative collaborators. This paper investigates the effects of a robot's social behaviors on people's creative thinking and their perceptions of the robot. We developed an interactive system to facilitate collaboration between a human and a robot in a creative activity. We conducted a user study (n = 12), in which the robot and adult participants took turns to create compositions using tangram pieces projected on a shared workspace. We observed four human behavioral traits related to creativity in the interaction: accepting robot inputs as inspiration, delegating the creative lead to the robot, communicating creative intents, and being playful in the creation. Our findings suggest designs for co-creation in social robots that consider the adversarial effect of giving the robot too much control in creation, as well as the role playfulness plays in the creative process.
  7. Augmentative and alternative communication (AAC) devices enable speech-based communication. However, AAC devices do not support nonverbal communication, which allows people to take turns, regulate conversation dynamics, and express intentions. Nonverbal communication requires motion, which is often challenging for AAC users to produce due to motor constraints. In this work, we explore how socially assistive robots, framed as "sidekicks," might provide augmented communicators (ACs) with a nonverbal channel of communication to support their conversational goals. We developed and conducted an accessible co-design workshop that involved two ACs, their caregivers, and three motion experts. We identified goals for conversational support, co-designed prototypes depicting possible sidekick forms, and enacted different sidekick motions and behaviors to achieve speakers' goals. We contribute guidelines for designing sidekicks that support ACs according to three key parameters: attention, precision, and timing. We show how these parameters manifest in appearance and behavior and how they can guide future designs for augmented nonverbal communication.
  8. We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) dataset. This is a large multimodal dataset of human interactions with a robotic arm in a shared autonomy setting designed to imitate assistive eating. The dataset provides human, robot, and environmental data views of 24 different people engaged in an assistive eating task with a 6-degree-of-freedom (6-DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features that come as a direct result of these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. This dataset could be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats such as video and human-readable CSV and YAML files.
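    A short sketch of how the CSV streams might be loaded and time-aligned with pandas before modeling intention. The file and column names here are hypothetical; the dataset's documentation defines the actual layout.

    ```python
    import pandas as pd
    import yaml  # PyYAML

    # Hypothetical paths: the dataset ships per-participant CSV/YAML streams.
    joystick = pd.read_csv("participant_01/joystick.csv")
    gaze = pd.read_csv("participant_01/gaze_egocentric.csv")
    with open("participant_01/metadata.yaml") as f:
        metadata = yaml.safe_load(f)

    # Align the two streams on a common clock (nearest-timestamp join).
    merged = pd.merge_asof(joystick.sort_values("timestamp"),
                           gaze.sort_values("timestamp"),
                           on="timestamp", direction="nearest")
    print(merged.head())
    ```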
  9. Human-robot collaboration systems benefit from recognizing people's intentions. This capability is especially useful for collaborative manipulation applications, in which users operate robot arms to manipulate objects. For collaborative manipulation, systems can determine users' intentions by tracking eye gaze and identifying gaze fixations on particular objects in the scene (i.e., semantic gaze labeling). Translating 2D fixation locations (from eye trackers) into 3D fixation locations (in the real world) is a technical challenge. One approach is to assign each fixation to the object closest to it. However, calibration drift, head motion, and the extra dimension required for real-world interactions make this position-matching approach inaccurate. In this work, we introduce velocity features that compare the relative motion between subsequent gaze fixations and a finite set of known points, and assign each fixation to one of those known points. We validate our approach on synthetic data to demonstrate that classifying using velocity features is more robust than a position-matching approach. In addition, we show that a classifier using velocity features improves semantic labeling on a real-world dataset of human-robot assistive manipulation interactions.
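    A minimal sketch contrasting the two approaches, assuming 3D fixation estimates and time-aligned positions for the known points. Calibration drift adds a roughly constant offset to positions, which cancels when comparing displacements, so velocity matching is more robust than position matching; the function names are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def position_match(fixation, known_points):
        """Baseline: assign the fixation to the nearest known point."""
        return int(np.argmin([np.linalg.norm(fixation - p) for p in known_points]))

    def velocity_match(prev_fixation, fixation, prev_points, points):
        """Compare the fixation's displacement with each known point's
        displacement over the same interval; a constant calibration offset
        cancels out of both displacements."""
        gaze_vel = fixation - prev_fixation
        errors = [np.linalg.norm(gaze_vel - (p - p_prev))
                  for p_prev, p in zip(prev_points, points)]
        return int(np.argmin(errors))
    ```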